Load the data from Study 2.
Each row is a different participant in Study 2 (there were 80). Columns 1-68 are variables that summarise the whole session. Columns 69-493 refer to events in each round of the Visible condition. Columns 494-918 refer to the Hidden condition. This experiment followed a within-subjects design (i.e. participants completed both the Hidden and Visible conditions). In the counterbalancing column, 0 means that the participant played the Visible game before the Hidden game, and 1 means that the participant played the Hidden game before the Visible game.
Trim the dataset and get it in long-format.
As in Study 1, we first test whether the probability of a shock occurring differs between the two conditions. Although this experiment followed a within-subjects design, random effects are not necessary here because the occurrence of shocks is independent between the two conditions.
All analyses in this document use the brm() function; see the brms package documentation to learn more.
## function(d2.01) {
## brm(data = d2.01, family = binomial,
## overall_num_shocks | trials(rounds_survived) ~ 0 + Intercept + Condition,
## prior = c(prior(normal(0, 1), class = b)),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## seed = 2113)
## }
We fix the seed (an arbitrary value) so that the results are reproducible. Here are the priors that were set for this model.
Let’s look at the results.
## Family: binomial
## Links: mu = logit
## Formula: overall_num_shocks | trials(rounds_survived) ~ 0 + Intercept + Condition
## Data: d2.01 (Number of observations: 160)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -1.36 0.06 -1.47 -1.24 1.00 1562 1870
## Condition -0.04 0.09 -0.21 0.13 1.00 1511 1854
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plotting the parameters.
The Condition parameter is -0.04, and its 95% credible intervals cross 0, implying that the probability of a shock is no different across the two conditions. On the probability scale:
post <- readd(post2.01)
visible_prob <- inv_logit_scaled(post$b_Intercept)
hidden_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
difference <- hidden_prob - visible_prob
quantile(difference, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## -0.03 -0.01 0.02
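For reference, inv_logit_scaled() with its default bounds is just the standard logistic transform (base R's plogis()); a minimal sketch:

```r
# Logit -> probability mapping used throughout this document.
inv_logit <- function(x) 1 / (1 + exp(-x))

# The Visible-condition intercept of -1.36 corresponds to roughly a
# 20% per-round shock probability.
round(inv_logit(-1.36), 2)

# Sanity check against base R's built-in logistic CDF.
stopifnot(isTRUE(all.equal(inv_logit(-1.36), plogis(-1.36))))
```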
Trim the dataset again.
We now fit a model to determine if the total amount of cattle lost due to shocks varies between conditions. Again, no random effects are needed because the outcome is independent across conditions (generated stochastically by the game), despite the within-subject design.
## function(d2.02) {
## brm(data = d2.02, family = gaussian,
## total_cattle_lost ~ 0 + Intercept + Condition,
## prior = c(prior(normal(0, 100), class = b, coef = 'Intercept'),
## prior(normal(0, 5), class = b, coef = 'Condition')),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## seed = 2113)
## }
The priors we used.
The results.
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: total_cattle_lost ~ 0 + Intercept + Condition
## Data: d2.02 (Number of observations: 160)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 51.29 2.13 47.09 55.47 1.00 2258 2302
## Condition -6.79 2.77 -12.16 -1.35 1.00 2269 2347
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 20.42 1.17 18.31 22.84 1.00 2446 2197
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plot the parameters.
The Condition parameter is -6.79, with 95% credible intervals below zero: on average, about 7 fewer cattle were lost in the Hidden condition than in the Visible condition.
Get a long-format data frame with binary request decisions over all rounds, for both conditions. If request == NA, player has died and been removed from the game, so we drop those rows.
This leaves us with 2834 request decisions.
We now fit a varying intercept and slope model, with participants nested within groups. We allow the slopes for both round number and condition to vary, consistent with the experiment’s within-subjects design (participants completed multiple rounds in both conditions).
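A note on the random-effects syntax: the Group/ID nesting shorthand follows the lme4 convention, expanding into a group-level term plus a participant-within-group term. The two formulas below are equivalent:

```r
# Shorthand nesting syntax, as used in the model below.
f_shorthand <- request ~ 0 + Intercept + round_number + Condition +
  (0 + Intercept + round_number + Condition | Group/ID)

# Expanded form: varying effects at the group level and at the
# participant-within-group level.
f_expanded <- request ~ 0 + Intercept + round_number + Condition +
  (0 + Intercept + round_number + Condition | Group) +
  (0 + Intercept + round_number + Condition | Group:ID)
```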
## function(d2.03) {
## brm(data = d2.03, family = bernoulli,
## request ~ 0 + Intercept + round_number + Condition
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## sample_prior = TRUE,
## control = list(adapt_delta = 0.99),
## seed = 2113)
## }
Here are the priors for the model we just fitted.
Now let’s see the results.
## Family: bernoulli
## Links: mu = logit
## Formula: request ~ 0 + Intercept + round_number + Condition + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.03 (Number of observations: 2834)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.63 0.28 0.07 1.15 1.01 448 807
## sd(round_number) 0.05 0.02 0.01 0.08 1.01 388 408
## sd(Condition) 1.62 0.32 1.01 2.29 1.00 993 1153
## cor(Intercept,round_number) -0.37 0.40 -0.88 0.63 1.01 499 940
## cor(Intercept,Condition) -0.24 0.34 -0.80 0.56 1.01 373 361
## cor(round_number,Condition) 0.35 0.30 -0.31 0.87 1.02 312 396
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.71 0.25 0.22 1.18 1.01 376 714
## sd(round_number) 0.04 0.02 0.00 0.08 1.01 271 980
## sd(Condition) 1.05 0.31 0.50 1.71 1.01 644 1328
## cor(Intercept,round_number) -0.44 0.42 -0.91 0.66 1.00 624 1242
## cor(Intercept,Condition) -0.33 0.32 -0.82 0.41 1.01 500 1238
## cor(round_number,Condition) 0.16 0.40 -0.67 0.85 1.01 422 960
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -1.00 0.18 -1.37 -0.66 1.00 3278 2683
## round_number -0.04 0.01 -0.07 -0.02 1.00 2897 2569
## Condition 0.63 0.31 0.03 1.22 1.00 2935 2619
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plotting the parameters.
Finally, let’s see the trace plots.
Looks like Stan sampled efficiently.
In this model, the fixed effect of condition is 0.63, with 95% credible intervals above 0. This implies that the participants are more likely to request from their partner in the Hidden condition. Converting to the probability scale:
post <- readd(post2.03)
visible_prob <- inv_logit_scaled(post$b_Intercept)
visible_prob %>%
median() %>%
round(2)
## [1] 0.27
hidden_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_prob %>%
median() %>%
round(2)
## [1] 0.41
difference <- hidden_prob - visible_prob
quantile(difference, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## 0.01 0.14 0.28
The absolute probability difference between the conditions is +0.14 (median), with 95% CIs above 0. Participants were more likely to request from their partner in the Hidden condition.
We compute a Bayes factor for this difference between probabilities. This is the inverse of the Bayes factor for the null hypothesis that the two conditions are equal.
## [1] 2.84
This Bayes factor implies only anecdotal support for the hypothesis that the probabilities differ across conditions.
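The document does not show how these Bayes factors are computed; since the models were fitted with sample_prior = TRUE, a common approach in brms is the Savage-Dickey density ratio (e.g. via brms::hypothesis()). A hand-rolled sketch, using simulated stand-in samples rather than the model's actual draws:

```r
# Savage-Dickey density ratio: BF10 = prior density at 0 divided by
# posterior density at 0, evaluated for the quantity of interest (here,
# the difference in request probabilities). Both sample vectors below
# are simulated stand-ins, not draws from the fitted model.
set.seed(2113)
prior_diff     <- rnorm(4000, mean = 0,    sd = 0.20)
posterior_diff <- rnorm(4000, mean = 0.14, sd = 0.05)

density_at_zero <- function(samples) {
  d <- density(samples)
  approx(d$x, d$y, xout = 0)$y
}

bf10 <- density_at_zero(prior_diff) / density_at_zero(posterior_diff)
bf10  # > 1 favours a non-zero difference; < 1 favours the null
```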
Get a long-format data frame with binary request decisions over all rounds, for both conditions. If request == NA, player has died and been removed from the game, so we drop those rows. However, we also filter out rows in which the player was below the minimum survival threshold (64 cattle).
This leaves us with 2368 request decisions.
We now fit a varying intercept and slope model, with participants nested within groups. Again, we allow the slopes for both round number and condition to vary.
## function(d2.04) {
## brm(data = d2.04, family = bernoulli,
## request ~ 0 + Intercept + round_number + Condition
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## sample_prior = TRUE,
## control = list(adapt_delta = 0.99),
## seed = 2113)
## }
The priors.
The results.
## Family: bernoulli
## Links: mu = logit
## Formula: request ~ 0 + Intercept + round_number + Condition + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.04 (Number of observations: 2368)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.12 0.47 0.16 2.03 1.00 548 707
## sd(round_number) 0.08 0.03 0.01 0.14 1.01 460 777
## sd(Condition) 1.93 0.75 0.27 3.33 1.01 548 889
## cor(Intercept,round_number) -0.14 0.43 -0.81 0.77 1.01 600 1478
## cor(Intercept,Condition) -0.04 0.41 -0.75 0.77 1.00 674 1477
## cor(round_number,Condition) 0.33 0.38 -0.60 0.89 1.00 648 870
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.54 0.36 0.88 2.28 1.00 718 1407
## sd(round_number) 0.09 0.03 0.04 0.14 1.00 546 901
## sd(Condition) 2.29 0.64 1.19 3.65 1.01 663 1381
## cor(Intercept,round_number) -0.60 0.24 -0.92 0.03 1.01 947 1330
## cor(Intercept,Condition) -0.26 0.26 -0.71 0.27 1.00 935 1533
## cor(round_number,Condition) 0.51 0.29 -0.17 0.93 1.00 444 982
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -1.98 0.32 -2.64 -1.38 1.00 2653 2709
## round_number -0.10 0.03 -0.16 -0.05 1.00 2649 2713
## Condition 0.19 0.49 -0.83 1.09 1.00 3187 3003
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Creating a forest plot of parameters.
Trace plots.
Looks like Stan sampled efficiently.
In this model, the fixed effect of condition is 0.19, with 95% credible intervals crossing 0. This implies that, at least when above the minimum survival threshold, the average probability of requesting did not differ between conditions.
Converting the fixed effects to the probability scale:
post <- readd(post2.04a)
visible_prob <- inv_logit_scaled(post$b_Intercept)
visible_prob %>%
median() %>%
round(2)
## [1] 0.12
hidden_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_prob %>%
median() %>%
round(2)
## [1] 0.14
difference <- hidden_prob - visible_prob
quantile(difference, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## -0.08 0.02 0.17
The absolute probability difference between the conditions is +0.02 (median), with 95% CIs crossing 0. Participants were no more likely to request from their partner in the Hidden condition.
We compute a Bayes factor for this difference between probabilities.
## [1] 0.32
This Bayes factor implies moderate support for the hypothesis that the probabilities are equal in each condition.
We found no effect of condition in the previous model. Is this because of order effects?
## function(d2.04) {
## brm(data = d2.04, family = bernoulli,
## request ~ 0 + Intercept + round_number + Condition*Counterbalancing
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## sample_prior = TRUE,
## control = list(adapt_delta = 0.99),
## seed = 2112)
## }
The priors.
The results.
## Family: bernoulli
## Links: mu = logit
## Formula: request ~ 0 + Intercept + round_number + Condition * Counterbalancing + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.04 (Number of observations: 2368)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.15 0.49 0.12 2.10 1.01 509 479
## sd(round_number) 0.08 0.03 0.01 0.14 1.00 686 708
## sd(Condition) 1.26 0.68 0.10 2.61 1.01 546 1365
## cor(Intercept,round_number) -0.14 0.42 -0.79 0.76 1.00 732 1460
## cor(Intercept,Condition) -0.14 0.45 -0.87 0.80 1.00 978 1866
## cor(round_number,Condition) 0.26 0.42 -0.72 0.91 1.01 769 1287
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.51 0.36 0.85 2.27 1.00 791 2019
## sd(round_number) 0.08 0.03 0.03 0.14 1.00 779 1350
## sd(Condition) 2.18 0.55 1.21 3.35 1.01 823 1925
## cor(Intercept,round_number) -0.61 0.24 -0.92 0.02 1.00 1233 1882
## cor(Intercept,Condition) -0.28 0.24 -0.69 0.24 1.00 1305 1860
## cor(round_number,Condition) 0.55 0.27 -0.07 0.94 1.00 598 1279
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -2.08 0.41 -2.89 -1.32 1.00 3272 3189
## round_number -0.10 0.03 -0.15 -0.05 1.00 3064 2993
## Condition -0.39 0.50 -1.37 0.58 1.00 3497 3073
## Counterbalancing 0.19 0.49 -0.79 1.13 1.00 3024 2988
## Condition:Counterbalancing 1.58 0.62 0.31 2.74 1.00 3033 2979
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Creating a forest plot of parameters.
Trace plots.
Looks like Stan sampled efficiently.
In this model, the interaction parameter is 1.58, with 95% CIs above zero. This implies that there is an interaction effect: condition only has an effect on cheating when participants played the Hidden game first.
Converting the fixed effects to the probability scale:
post <- readd(post2.04b)
visible_VisibleFirst_prob <- inv_logit_scaled(post$b_Intercept)
visible_VisibleFirst_prob %>%
median() %>%
round(2)
## [1] 0.11
hidden_VisibleFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_VisibleFirst_prob %>%
median() %>%
round(2)
## [1] 0.08
difference_VisibleFirst <- hidden_VisibleFirst_prob - visible_VisibleFirst_prob
quantile(difference_VisibleFirst, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## -0.11 -0.03 0.06
When participants play the Visible game first, the absolute probability difference between the conditions is -0.03 (median), with 95% CIs crossing 0. Participants were no more likely to request from their partner in the Hidden condition.
We compute a Bayes factor for this difference between probabilities.
## [1] 0.31
This Bayes factor implies moderate support for the hypothesis that the probabilities are equal in each condition.
What about for when participants play the Hidden game first?
visible_HiddenFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Counterbalancing)
visible_HiddenFirst_prob %>%
median() %>%
round(2)
## [1] 0.13
hidden_HiddenFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Counterbalancing +
post$b_Condition + post$`b_Condition:Counterbalancing`)
hidden_HiddenFirst_prob %>%
median() %>%
round(2)
## [1] 0.34
difference_HiddenFirst <- hidden_HiddenFirst_prob - visible_HiddenFirst_prob
quantile(difference_HiddenFirst, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## 0.01 0.20 0.43
When participants play the Hidden game first, the absolute probability difference between the conditions is +0.20 (median), with 95% CIs above 0. Participants were more likely to request from their partner while above the threshold in the Hidden condition.
We compute a Bayes factor for this difference between probabilities.
## [1] 3.07
This Bayes factor implies moderate support for the hypothesis that the probabilities differ across conditions.
Get a long-format data frame with the ‘received’ variable (i.e. how much a player received on any given round). We reverse this so that it reflects how much the player gave to their partner (i.e. how much their partner received). We drop rows with NAs, because in those rounds the partner did not request help. We then code whether the player gave nothing in response to a request (1) or gave at least one head of cattle (0).
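The recoding described above can be sketched in base R on a toy data frame; the column name partner_received is an illustrative assumption, not the dataset's actual name:

```r
# Toy sketch of the recoding: NA means the partner made no request;
# 0 means the request was ignored; > 0 means at least some cattle given.
d <- data.frame(partner_received = c(2, 0, NA, 1, 0))

d$amount_given <- d$partner_received        # amount the player gave
d <- d[!is.na(d$amount_given), ]            # keep only rounds with a request
d$notResponded <- as.integer(d$amount_given == 0)

d$notResponded  # 0 1 0 1
```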
This leaves us with 716 possible responses to requests.
We then fit the varying intercept and slope model, again with participants nested within groups.
## function(d2.05) {
## brm(data = d2.05, family = bernoulli,
## notResponded ~ 0 + Intercept + round_number + Condition
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## iter = 2500, warmup = 1000, chains = 4, cores = 4,
## sample_prior = TRUE,
## control = list(adapt_delta = 0.99),
## seed = 2113)
## }
The priors we used.
The results.
## Family: bernoulli
## Links: mu = logit
## Formula: notResponded ~ 0 + Intercept + round_number + Condition + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.05 (Number of observations: 716)
## Samples: 4 chains, each with iter = 2500; warmup = 1000; thin = 1;
## total post-warmup samples = 6000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.04 0.48 0.11 2.03 1.00 1087 897
## sd(round_number) 0.09 0.04 0.01 0.18 1.00 649 871
## sd(Condition) 1.19 0.56 0.16 2.33 1.00 538 853
## cor(Intercept,round_number) 0.09 0.42 -0.68 0.85 1.00 683 1140
## cor(Intercept,Condition) -0.40 0.41 -0.92 0.62 1.00 709 1490
## cor(round_number,Condition) 0.37 0.41 -0.60 0.93 1.00 991 1565
##
## ~Group:ID (Number of levels: 78)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.52 0.41 0.02 1.49 1.00 664 1547
## sd(round_number) 0.03 0.02 0.00 0.09 1.00 1961 2966
## sd(Condition) 0.55 0.45 0.02 1.65 1.00 595 1337
## cor(Intercept,round_number) -0.09 0.51 -0.90 0.87 1.00 2710 3871
## cor(Intercept,Condition) -0.41 0.50 -0.98 0.74 1.00 987 2731
## cor(round_number,Condition) -0.05 0.50 -0.90 0.87 1.00 4005 4462
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -1.70 0.37 -2.49 -1.01 1.00 2261 3752
## round_number -0.06 0.04 -0.14 -0.00 1.00 1347 2773
## Condition 0.49 0.38 -0.28 1.21 1.00 2662 3868
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plotting the parameters.
Let’s look at the trace plots to make sure Stan sampled efficiently.
Rhat values, effective sample sizes, and trace plots look okay.
The fixed effect of condition is 0.49, with 95% credible intervals that cross 0. This implies that participants were no less likely to respond to requests in the Hidden condition.
As before, we sample from the posterior and convert this difference to the absolute probability scale.
post <- readd(post2.05)
visible_prob <- inv_logit_scaled(post$b_Intercept)
visible_prob %>%
median() %>%
round(2)
## [1] 0.16
hidden_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_prob %>%
median() %>%
round(2)
## [1] 0.23
difference <- hidden_prob - visible_prob
quantile(difference, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## -0.04 0.07 0.19
The probability difference between the conditions is 0.07 (median), with 95% CIs crossing 0.
We compute a Bayes factor for this difference between probabilities.
## [1] 0.75
This Bayes factor implies anecdotal support for the hypothesis that the probabilities are equal across conditions.
Again, the data wrangling for this model is a little trickier.
This leaves us with 473 possible response decisions in which the player was able to give their partner what they asked for. Our outcome variable is whether they fulfilled that request or not.
We now fit the varying intercept and slope model, again with participants nested within groups.
## function(d2.06) {
## brm(data = d2.06, family = bernoulli,
## notFulfilled ~ 0 + Intercept + round_number + Condition
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## control = list(adapt_delta = 0.99),
## sample_prior = TRUE,
## iter = 2500, warmup = 1000, chains = 4, cores = 4,
## seed = 2113)
## }
Here are the priors we used.
Let’s see the results.
## Family: bernoulli
## Links: mu = logit
## Formula: notFulfilled ~ 0 + Intercept + round_number + Condition + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.06 (Number of observations: 473)
## Samples: 4 chains, each with iter = 2500; warmup = 1000; thin = 1;
## total post-warmup samples = 6000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.22 0.56 0.15 2.38 1.00 1367 1065
## sd(round_number) 0.16 0.07 0.04 0.33 1.00 915 1261
## sd(Condition) 0.81 0.61 0.03 2.27 1.00 1813 2487
## cor(Intercept,round_number) 0.21 0.40 -0.57 0.90 1.00 1394 1378
## cor(Intercept,Condition) -0.05 0.49 -0.88 0.86 1.00 3517 3473
## cor(round_number,Condition) 0.22 0.47 -0.76 0.93 1.00 3569 4125
##
## ~Group:ID (Number of levels: 73)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.64 0.47 0.02 1.77 1.00 1628 2715
## sd(round_number) 0.08 0.05 0.01 0.19 1.00 1509 2655
## sd(Condition) 0.77 0.56 0.03 2.09 1.00 1428 2773
## cor(Intercept,round_number) -0.19 0.49 -0.92 0.82 1.00 2488 3703
## cor(Intercept,Condition) -0.30 0.49 -0.95 0.77 1.00 2710 3777
## cor(round_number,Condition) 0.07 0.50 -0.86 0.89 1.00 2855 4481
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.54 0.39 -1.34 0.19 1.00 3069 4056
## round_number -0.13 0.06 -0.26 -0.04 1.00 2610 3225
## Condition 0.94 0.40 0.12 1.68 1.00 4551 4170
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plotting the parameters.
Let’s look at the trace plots to make sure Stan sampled efficiently.
Stan sampled efficiently.
The fixed effect of condition is 0.94, with 95% credible intervals above 0. This implies that participants were less likely to fulfill requests (when able to do so) in the Hidden condition.
On the absolute probability scale.
post <- readd(post2.06a)
visible_prob <- inv_logit_scaled(post$b_Intercept)
visible_prob %>%
median() %>%
round(2)
## [1] 0.37
hidden_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_prob %>%
median() %>%
round(2)
## [1] 0.6
difference <- hidden_prob - visible_prob
quantile(difference, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## 0.03 0.23 0.39
The probability difference between the conditions is 0.23 (median), with 95% CIs above zero.
We compute a Bayes factor for this difference between probabilities.
## [1] 6.44
This Bayes factor implies moderate support for the hypothesis that the probabilities differ across conditions.
As before, we test for order effects by including an interaction.
## function(d2.06) {
## brm(data = d2.06, family = bernoulli,
## notFulfilled ~ 0 + Intercept + round_number + Condition*Counterbalancing
## + (0 + Intercept + round_number + Condition | Group/ID),
## prior = c(prior(normal(0, 1), class = b),
## prior(student_t(3, 0, 10), class = sd),
## prior(lkj(1), class = cor)),
## control = list(adapt_delta = 0.99),
## sample_prior = TRUE,
## iter = 2500, warmup = 1000, chains = 4, cores = 4,
## seed = 2113)
## }
Here are the priors we used.
Let’s see the results.
## Family: bernoulli
## Links: mu = logit
## Formula: notFulfilled ~ 0 + Intercept + round_number + Condition * Counterbalancing + (0 + Intercept + round_number + Condition | Group/ID)
## Data: d2.06 (Number of observations: 473)
## Samples: 4 chains, each with iter = 2500; warmup = 1000; thin = 1;
## total post-warmup samples = 6000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 1.17 0.56 0.15 2.37 1.01 1375 1383
## sd(round_number) 0.17 0.07 0.04 0.33 1.01 1037 1258
## sd(Condition) 0.83 0.63 0.04 2.33 1.00 1061 2277
## cor(Intercept,round_number) 0.22 0.38 -0.53 0.88 1.00 1118 2273
## cor(Intercept,Condition) -0.09 0.49 -0.88 0.86 1.00 2739 3326
## cor(round_number,Condition) 0.23 0.46 -0.77 0.91 1.00 2979 4584
##
## ~Group:ID (Number of levels: 73)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.70 0.49 0.03 1.86 1.00 1062 1754
## sd(round_number) 0.08 0.05 0.00 0.20 1.00 1089 2000
## sd(Condition) 0.86 0.61 0.04 2.29 1.00 1190 2541
## cor(Intercept,round_number) -0.19 0.50 -0.92 0.82 1.00 1819 3484
## cor(Intercept,Condition) -0.32 0.49 -0.95 0.77 1.00 1889 3232
## cor(round_number,Condition) 0.12 0.50 -0.84 0.90 1.00 2692 3945
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.76 0.44 -1.67 0.08 1.00 4018 4474
## round_number -0.13 0.05 -0.25 -0.04 1.00 2512 3362
## Condition 0.63 0.51 -0.42 1.58 1.00 4191 4119
## Counterbalancing 0.44 0.55 -0.67 1.49 1.00 4565 4552
## Condition:Counterbalancing 0.48 0.59 -0.67 1.66 1.00 5237 4527
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plotting the parameters.
The interaction parameter has 95% CIs that cross zero, indicating no interaction effect.
Converting the fixed effects to the probability scale:
post <- readd(post2.06b)
visible_VisibleFirst_prob <- inv_logit_scaled(post$b_Intercept)
visible_VisibleFirst_prob %>%
median() %>%
round(2)
## [1] 0.32
hidden_VisibleFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Condition)
hidden_VisibleFirst_prob %>%
median() %>%
round(2)
## [1] 0.47
difference_VisibleFirst <- hidden_VisibleFirst_prob - visible_VisibleFirst_prob
quantile(difference_VisibleFirst, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## -0.09 0.15 0.36
When participants play the Visible game first, the absolute probability difference between the conditions is +0.15 (median), with 95% CIs crossing 0. Participants were no less likely to fulfill their partner’s request in the Hidden condition.
We compute a Bayes factor for this difference between probabilities.
## [1] 1.38
This Bayes factor implies anecdotal support for the hypothesis that the probabilities are different across conditions.
What about for when participants play the Hidden game first?
visible_HiddenFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Counterbalancing)
visible_HiddenFirst_prob %>%
median() %>%
round(2)
## [1] 0.43
hidden_HiddenFirst_prob <- inv_logit_scaled(post$b_Intercept + post$b_Counterbalancing +
post$b_Condition + post$`b_Condition:Counterbalancing`)
hidden_HiddenFirst_prob %>%
median() %>%
round(2)
## [1] 0.69
difference_HiddenFirst <- hidden_HiddenFirst_prob - visible_HiddenFirst_prob
quantile(difference_HiddenFirst, c(0.025, 0.5, 0.975)) %>% round(2)
## 2.5% 50% 97.5%
## 0.03 0.26 0.46
When participants play the Hidden game first, the absolute probability difference between the conditions is +0.26 (median), with 95% CIs above 0. Participants were less likely to fulfill their partner’s request in the Hidden condition.
We compute a Bayes factor for this difference between probabilities.
## [1] 6.88
This Bayes factor implies moderate support for the hypothesis that the probabilities differ across conditions.
Get the data.
Fit the model.
## function(d2.07) {
## brm(rounds_survived | cens(censored) ~ 0 + Intercept + Condition*Counterbalancing +
## (1 | Group/ID),
## data = d2.07, family = weibull, inits = "0",
## prior = c(prior(normal(0, 2), class = b, coef = "Condition"),
## prior(normal(0, 2), class = b, coef = "Counterbalancing"),
## prior(normal(0, 2), class = b, coef = "Condition:Counterbalancing")),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.9, max_treedepth = 15),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: weibull
## Links: mu = log; shape = identity
## Formula: rounds_survived | cens(censored) ~ 0 + Intercept + Condition * Counterbalancing + (1 | Group/ID)
## Data: d2.07 (Number of observations: 160)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.45 0.26 0.03 1.00 1.00 992 1412
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.28 0.21 0.01 0.76 1.00 1376 1936
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 4.62 0.43 3.92 5.56 1.00 3303 2490
## Condition -0.66 0.41 -1.51 0.12 1.00 3537 2791
## Counterbalancing 0.20 0.47 -0.72 1.09 1.00 3302 3098
## Condition:Counterbalancing -0.41 0.55 -1.52 0.68 1.00 3561 3112
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## shape 1.03 0.14 0.78 1.33 1.00 3994 2937
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plot parameters.
Trace plots.
For participants who play the Visible game first, estimate survival time (rounds survived) in the Visible condition.
post <- readd(post2.07)
set.seed(2113)
srateVisible <- rweibull(4000, post$shape, exp(post$b_Intercept))
median(srateVisible)
## [1] 64.80291
Estimate survival time in the Hidden condition.
set.seed(2113)
srateHidden <- rweibull(4000, post$shape, exp(post$b_Intercept + post$b_Condition))
median(srateHidden)
## [1] 34.57047
difference <- srateHidden - srateVisible
quantile(difference, c(0.025, 0.5, 0.975))
## 2.5% 50% 97.5%
## -299.442155 -25.623587 4.997195
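As a rough sanity check on these simulation-based medians, the Weibull median has a closed form: for shape k and scale λ it is λ(log 2)^(1/k). Plugging in the point estimates above (shape ≈ 1.03, log-scale intercept ≈ 4.62) ignores posterior uncertainty, so it will not match the simulated value exactly:

```r
# Closed-form Weibull median, as a cross-check on the simulated medians.
weibull_median <- function(shape, scale) scale * log(2)^(1 / shape)

weibull_median(1.03, exp(4.62))  # roughly 70 rounds
```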
For participants who play the Hidden game first, estimate survival time in the Visible condition.
set.seed(2113)
srateVisible <- rweibull(4000, post$shape, exp(post$b_Intercept + post$b_Counterbalancing))
median(srateVisible)
## [1] 80.88703
Estimate survival time in the Hidden condition.
set.seed(2113)
srateHidden <- rweibull(4000, post$shape, exp(post$b_Intercept + post$b_Condition + post$b_Counterbalancing + post$`b_Condition:Counterbalancing`))
median(srateHidden)
## [1] 28.28715
difference <- srateHidden - srateVisible
quantile(difference, c(0.025, 0.5, 0.975))
## 2.5% 50% 97.5%
## -489.127168 -47.145856 -1.177297
Load data.
Fit the model.
## function(d2.08) {
## brm(rounds_survived | cens(censored) ~ 0 + Intercept + prop_rule1 + (1 | Group/ID),
## data = d2.08, family = weibull, inits = "0",
## prior = prior(normal(0, 2), class = b, coef = "prop_rule1"),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.99),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: weibull
## Links: mu = log; shape = identity
## Formula: rounds_survived | cens(censored) ~ 0 + Intercept + prop_rule1 + (1 | Group/ID)
## Data: d2.08 (Number of observations: 154)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.29 0.20 0.01 0.73 1.00 1700 2059
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.23 0.17 0.01 0.62 1.00 1997 2183
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 4.10 0.27 3.66 4.69 1.00 3870 2832
## prop_rule1 0.49 0.55 -0.52 1.66 1.00 6158 2010
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## shape 1.13 0.16 0.84 1.45 1.00 3895 2973
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plot parameters.
Trace plots.
Load data.
Fit the model.
## function(d2.09) {
## brm(rounds_survived | cens(censored) ~ 0 + Intercept + prop_rule2 + (1 | Group/ID),
## data = d2.09, family = weibull, inits = "0",
## prior = prior(normal(0, 2), class = b, coef = "prop_rule2"),
## iter = 2000, warmup = 1000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.99),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: weibull
## Links: mu = log; shape = identity
## Formula: rounds_survived | cens(censored) ~ 0 + Intercept + prop_rule2 + (1 | Group/ID)
## Data: d2.09 (Number of observations: 117)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.28 0.21 0.01 0.80 1.00 1722 2106
##
## ~Group:ID (Number of levels: 73)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.28 0.21 0.01 0.78 1.00 1395 1864
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 4.26 0.36 3.71 5.09 1.00 3857 2367
## prop_rule2 -0.18 0.41 -0.99 0.64 1.00 4885 2980
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## shape 1.39 0.26 0.93 1.95 1.00 4738 2646
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Plot parameters.
Trace plots.
Get data.
Fit the model.
## function(d2.10) {
## brm(herd_size_after_shock ~ 0 + Intercept + round_number + Condition*Counterbalancing*request +
## (1 + round_number + Condition | Group/ID),
## data = d2.10,
## prior = c(prior(normal(0, 100), class = b, coef = 'Intercept'),
## prior(normal(0, 5), class = b, coef = 'round_number'),
## prior(normal(0, 5), class = b, coef = 'Condition'),
## prior(normal(0, 5), class = b, coef = 'Condition:Counterbalancing'),
## prior(normal(0, 5), class = b, coef = 'Condition:Counterbalancing:request'),
## prior(normal(0, 5), class = b, coef = 'Condition:request'),
## prior(normal(0, 5), class = b, coef = 'Counterbalancing'),
## prior(normal(0, 5), class = b, coef = 'Counterbalancing:request'),
## prior(normal(0, 5), class = b, coef = 'request')),
## iter = 4000, warmup = 2000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.95),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: herd_size_after_shock ~ 0 + Intercept + round_number + Condition * Counterbalancing * request + (1 + round_number + Condition | Group/ID)
## Data: d2.10 (Number of observations: 2834)
## Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
## total post-warmup samples = 8000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 2.21 1.35 0.12 5.07 1.01 719 1827
## sd(round_number) 0.21 0.11 0.01 0.42 1.00 757 1949
## sd(Condition) 2.58 1.72 0.12 6.43 1.01 583 1281
## cor(Intercept,round_number) 0.05 0.47 -0.83 0.87 1.00 1153 2447
## cor(Intercept,Condition) -0.23 0.50 -0.93 0.81 1.00 1129 2476
## cor(round_number,Condition) -0.10 0.48 -0.89 0.82 1.00 1725 4129
##
## ~Group:ID (Number of levels: 80)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 7.79 0.85 6.21 9.55 1.00 1767 2686
## sd(round_number) 0.47 0.06 0.37 0.60 1.00 1723 3542
## sd(Condition) 10.59 1.07 8.59 12.83 1.00 1557 3205
## cor(Intercept,round_number) -0.10 0.15 -0.39 0.19 1.00 1944 3128
## cor(Intercept,Condition) -0.76 0.07 -0.87 -0.61 1.00 2180 4036
## cor(round_number,Condition) 0.08 0.15 -0.20 0.37 1.00 2098 3500
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 70.93 1.34 68.34 73.50 1.00 2333 4090
## round_number 0.24 0.07 0.10 0.38 1.00 4011 5093
## Condition 0.79 1.71 -2.57 4.23 1.00 2694 4438
## Counterbalancing 1.44 1.79 -2.13 4.95 1.00 2624 4274
## request -9.21 0.72 -10.61 -7.78 1.00 6086 5949
## Condition:Counterbalancing 0.62 2.31 -3.78 5.16 1.00 3019 4763
## Condition:request 0.12 1.16 -2.14 2.40 1.00 5351 6240
## Counterbalancing:request 0.01 0.98 -1.91 1.90 1.00 6041 6167
## Condition:Counterbalancing:request 0.48 1.56 -2.63 3.51 1.00 5524 6231
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 7.87 0.11 7.65 8.09 1.00 12817 5734
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Trace plots.
We focus on participants who played the Hidden game first.
For participants in the Visible condition, posterior herd size when not requesting:
## [1] 72.38
And when requesting:
## [1] 63.16
For participants in the Hidden condition, posterior herd size when not requesting:
## [1] 73.78
And when requesting:
## [1] 65.17
Does the herd size when requesting differ across conditions? Get the posterior difference.
## 2.5% 50% 97.5%
## -1.81 2.03 5.71
No: the 95% credible interval comfortably includes zero.
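The cell means and the difference above can be recovered from the posterior draws. This is a sketch, not the exact code used: it assumes the draws are in a data frame `post` (as extracted with `readd()` earlier), that Condition and Counterbalancing are 0/1 coded with Hidden = 1 and Hidden-first = 1, and it ignores the round_number term.

```r
# Hidden-first participants: Counterbalancing = 1
visible_norequest <- post$b_Intercept + post$b_Counterbalancing
visible_request   <- visible_norequest + post$b_request +
                       post$`b_Counterbalancing:request`
hidden_norequest  <- visible_norequest + post$b_Condition +
                       post$`b_Condition:Counterbalancing`
hidden_request    <- hidden_norequest + post$b_request +
                       post$`b_Counterbalancing:request` +
                       post$`b_Condition:request` +
                       post$`b_Condition:Counterbalancing:request`
# posterior difference in herd size when requesting (Hidden minus Visible)
quantile(hidden_request - visible_request, c(0.025, 0.5, 0.975))
```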
Get data. We log-transform the amount requested.
Fit the model.
## function(d2.11) {
## brm(request_amount.log ~ 0 + Intercept + round_number + Condition*Counterbalancing +
## (1 + round_number + Condition | Group/ID),
## data = d2.11,
## prior = c(prior(normal(0, 5), class = b, coef = 'Intercept'),
## prior(normal(0, 2), class = b, coef = 'round_number'),
## prior(normal(0, 2), class = b, coef = 'Condition'),
## prior(normal(0, 2), class = b, coef = 'Condition:Counterbalancing'),
## prior(normal(0, 2), class = b, coef = 'Counterbalancing')),
## iter = 4000, warmup = 2000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.95),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: request_amount.log ~ 0 + Intercept + round_number + Condition * Counterbalancing + (1 + round_number + Condition | Group/ID)
## Data: d2.11 (Number of observations: 708)
## Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
## total post-warmup samples = 8000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.57 0.12 0.35 0.83 1.00 2988 3110
## sd(round_number) 0.04 0.01 0.02 0.06 1.00 2573 3366
## sd(Condition) 0.67 0.14 0.42 0.96 1.00 2576 3045
## cor(Intercept,round_number) 0.14 0.30 -0.41 0.76 1.00 1567 2270
## cor(Intercept,Condition) -0.68 0.16 -0.92 -0.29 1.00 2125 2933
## cor(round_number,Condition) -0.57 0.23 -0.94 -0.05 1.00 1791 2783
##
## ~Group:ID (Number of levels: 78)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.25 0.13 0.02 0.52 1.00 1263 2520
## sd(round_number) 0.02 0.01 0.00 0.04 1.00 1080 2427
## sd(Condition) 0.33 0.13 0.06 0.59 1.00 885 1288
## cor(Intercept,round_number) -0.38 0.48 -0.95 0.75 1.00 1453 3031
## cor(Intercept,Condition) -0.37 0.45 -0.93 0.72 1.00 902 2389
## cor(round_number,Condition) 0.41 0.43 -0.63 0.96 1.00 1129 1680
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 1.20 0.17 0.87 1.52 1.00 3181 4649
## round_number 0.02 0.01 -0.00 0.03 1.00 5954 6437
## Condition 0.13 0.19 -0.24 0.51 1.00 3840 4651
## Counterbalancing -0.09 0.23 -0.55 0.36 1.00 3160 4132
## Condition:Counterbalancing 0.07 0.26 -0.45 0.58 1.00 4048 4984
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.71 0.02 0.67 0.75 1.00 4772 5208
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Trace plots.
We focus on participants who played the Hidden game first.
How many cattle are requested on average by participants in the Visible condition?
## [1] 3.04
And in the Hidden condition?
## [1] 3.7
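These averages can be recovered from the posterior draws by back-transforming from the log scale. A sketch, under the same coding assumptions as before (Hidden = 1, Hidden-first = 1):

```r
# Hidden-first participants: Counterbalancing = 1
visible <- exp(post$b_Intercept + post$b_Counterbalancing)
hidden  <- exp(post$b_Intercept + post$b_Condition +
                 post$b_Counterbalancing +
                 post$`b_Condition:Counterbalancing`)
median(visible) %>% round(2)
median(hidden) %>% round(2)
```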
Do these amounts differ?
quantile(exp(post$b_Intercept + post$b_Counterbalancing) - exp(post$b_Intercept + post$b_Condition + post$b_Counterbalancing + post$`b_Condition:Counterbalancing`), c(.025, .5, .975)) %>% round(2)
## 2.5% 50% 97.5%
## -1.91 -0.66 0.60
Not reliably: the median difference is negative, but the 95% credible interval includes zero.
Get data. We calculate diff as the difference between the amount requested and the amount needed to reach the minimum survival threshold.
Fit the model.
## function(d2.12) {
## # deal with outliers by using student-t distribution
## # https://solomonkurz.netlify.app/post/robust-linear-regression-with-the-robust-student-s-t-distribution/
## brm(bf(diff ~ 0 + Intercept + round_number + Condition*Counterbalancing +
## (1 + round_number + Condition | Group/ID), nu = 4),
## data = d2.12, family = student,
## prior = c(prior(normal(0, 5), class = b, coef = 'Intercept'),
## prior(normal(0, 2), class = b, coef = 'round_number'),
## prior(normal(0, 2), class = b, coef = 'Condition'),
## prior(normal(0, 2), class = b, coef = 'Condition:Counterbalancing'),
## prior(normal(0, 2), class = b, coef = 'Counterbalancing')),
## iter = 4000, warmup = 2000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.95),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: student
## Links: mu = identity; sigma = identity; nu = identity
## Formula: diff ~ 0 + Intercept + round_number + Condition * Counterbalancing + (1 + round_number + Condition | Group/ID)
## nu = 4
## Data: d2.12 (Number of observations: 335)
## Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
## total post-warmup samples = 8000
##
## Group-Level Effects:
## ~Group (Number of levels: 39)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.72 0.46 0.04 1.75 1.00 1862 3682
## sd(round_number) 0.08 0.04 0.01 0.16 1.00 1105 2317
## sd(Condition) 0.67 0.46 0.03 1.70 1.00 2385 3593
## cor(Intercept,round_number) 0.17 0.46 -0.77 0.91 1.00 1835 3001
## cor(Intercept,Condition) -0.17 0.49 -0.92 0.83 1.00 4734 5387
## cor(round_number,Condition) -0.02 0.49 -0.87 0.86 1.00 5069 6229
##
## ~Group:ID (Number of levels: 68)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.51 0.36 0.02 1.31 1.00 1933 4055
## sd(round_number) 0.05 0.03 0.00 0.12 1.00 1839 3503
## sd(Condition) 0.69 0.45 0.03 1.68 1.00 1960 4219
## cor(Intercept,round_number) -0.00 0.50 -0.88 0.87 1.00 3633 4595
## cor(Intercept,Condition) -0.07 0.50 -0.90 0.85 1.00 3446 4368
## cor(round_number,Condition) 0.08 0.49 -0.83 0.89 1.00 4081 5124
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.15 0.45 -1.05 0.75 1.00 6457 5989
## round_number -0.02 0.03 -0.09 0.04 1.00 8955 5334
## Condition -0.06 0.48 -1.01 0.88 1.00 6460 6091
## Counterbalancing 0.24 0.61 -0.92 1.44 1.00 5687 5582
## Condition:Counterbalancing 0.61 0.71 -0.76 1.98 1.00 6781 6579
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 2.14 0.13 1.90 2.42 1.00 9416 5973
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Trace plots.
We focus on participants who played the Hidden game first.
What’s the posterior diff score for participants in the Visible condition? A positive score indicates asking for “too many” resources, while a negative score indicates asking for “too few”.
## 2.5% 50% 97.5%
## -0.92 0.09 1.14
And for participants in the Hidden condition?
## 2.5% 50% 97.5%
## -0.34 0.62 1.64
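These intervals can be recovered from the draws of this model. A sketch, again assuming 0/1 coding with Hidden = 1 and Hidden-first = 1, and ignoring the round_number term:

```r
# Hidden-first participants: Counterbalancing = 1
diff_visible <- post$b_Intercept + post$b_Counterbalancing
diff_hidden  <- diff_visible + post$b_Condition +
                  post$`b_Condition:Counterbalancing`
quantile(diff_visible, c(.025, .5, .975)) %>% round(2)
quantile(diff_hidden,  c(.025, .5, .975)) %>% round(2)
```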
Get data. We calculate diff as the difference between the amount requested by one’s partner and the amount given.
Fit the model.
## function(d2.13) {
## # deal with outliers by using student-t distribution
## # https://solomonkurz.netlify.app/post/robust-linear-regression-with-the-robust-student-s-t-distribution/
## brm(bf(diff ~ 0 + Intercept + round_number + Condition*Counterbalancing +
## (1 + round_number + Condition | Group/ID), nu = 4),
## data = d2.13, family = student,
## prior = c(prior(normal(0, 5), class = b, coef = 'Intercept'),
## prior(normal(0, 2), class = b, coef = 'round_number'),
## prior(normal(0, 2), class = b, coef = 'Condition'),
## prior(normal(0, 2), class = b, coef = 'Condition:Counterbalancing'),
## prior(normal(0, 2), class = b, coef = 'Counterbalancing')),
## iter = 4000, warmup = 2000, chains = 4, cores = 4,
## control = list(adapt_delta = 0.95),
## sample_prior = TRUE, seed = 2113)
## }
What priors did we use?
Results.
## Family: student
## Links: mu = identity; sigma = identity; nu = identity
## Formula: diff ~ 0 + Intercept + round_number + Condition * Counterbalancing + (1 + round_number + Condition | Group/ID)
## nu = 4
## Data: d2.13 (Number of observations: 708)
## Samples: 4 chains, each with iter = 4000; warmup = 2000; thin = 1;
## total post-warmup samples = 8000
##
## Group-Level Effects:
## ~Group (Number of levels: 40)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.75 0.23 0.33 1.25 1.00 3387 4333
## sd(round_number) 0.02 0.01 0.00 0.05 1.00 2236 3940
## sd(Condition) 1.39 0.37 0.70 2.15 1.00 3026 3085
## cor(Intercept,round_number) -0.17 0.49 -0.90 0.80 1.00 5628 5966
## cor(Intercept,Condition) 0.41 0.31 -0.24 0.92 1.00 1806 3622
## cor(round_number,Condition) -0.01 0.48 -0.86 0.85 1.01 857 2699
##
## ~Group:ID (Number of levels: 78)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.20 0.16 0.01 0.58 1.00 3426 3890
## sd(round_number) 0.01 0.01 0.00 0.04 1.00 3953 4162
## sd(Condition) 0.57 0.39 0.02 1.46 1.00 1312 2326
## cor(Intercept,round_number) -0.14 0.50 -0.92 0.84 1.00 7191 5806
## cor(Intercept,Condition) -0.13 0.50 -0.92 0.84 1.00 2621 4391
## cor(round_number,Condition) -0.01 0.50 -0.88 0.87 1.00 3179 5603
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept -0.46 0.28 -1.03 0.08 1.00 4980 5877
## round_number 0.01 0.01 -0.01 0.04 1.00 8548 6353
## Condition -0.85 0.46 -1.75 0.04 1.00 4718 4494
## Counterbalancing -0.14 0.34 -0.79 0.57 1.00 5032 5456
## Condition:Counterbalancing -0.95 0.60 -2.14 0.26 1.00 4994 5180
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.89 0.09 1.72 2.07 1.00 10965 6308
##
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Trace plots.
We focus on participants who played the Hidden game first.
What’s the posterior diff score for participants in the Visible condition?
## 2.5% 50% 97.5%
## -1.16 -0.61 -0.02
What about participants in the Hidden condition?
## 2.5% 50% 97.5%
## -3.33 -2.40 -1.47
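As with the previous model, the per-condition scores can be sketched from the posterior draws (assuming 0/1 coding with Hidden = 1 and Hidden-first = 1, ignoring round_number):

```r
# Hidden-first participants: Counterbalancing = 1
diff_visible <- post$b_Intercept + post$b_Counterbalancing
diff_hidden  <- diff_visible + post$b_Condition +
                  post$`b_Condition:Counterbalancing`
quantile(diff_visible, c(.025, .5, .975)) %>% round(2)
quantile(diff_hidden,  c(.025, .5, .975)) %>% round(2)
```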
Get the posterior difference between these.
quantile(post$b_Condition + post$`b_Condition:Counterbalancing`, c(.025, .5, .975)) %>% round(2)
## 2.5% 50% 97.5%
## -2.66 -1.79 -0.99
sessionInfo()
## R version 3.6.2 (2019-12-12)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows >= 8 x64 (build 9200)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=English_New Zealand.1252 LC_CTYPE=English_New Zealand.1252 LC_MONETARY=English_New Zealand.1252
## [4] LC_NUMERIC=C LC_TIME=English_New Zealand.1252
##
## attached base packages:
## [1] grid stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] forcats_0.4.0 stringr_1.4.0 dplyr_0.8.4 purrr_0.3.3 readr_1.3.1 tidyr_1.0.2
## [7] tibble_2.1.3 tidyverse_1.3.0 png_0.1-7 ggplot2_3.2.1 drake_7.10.0.9000 cowplot_1.0.0
## [13] brms_2.11.6 Rcpp_1.0.3 bayesplot_1.7.1
##
## loaded via a namespace (and not attached):
## [1] colorspace_1.4-1 ellipsis_0.3.0 ggridges_0.5.2 rsconnect_0.8.16 markdown_1.1
## [6] base64enc_0.1-3 fs_1.3.1 rstudioapi_0.11 farver_2.0.3 rstan_2.19.2
## [11] DT_0.12 fansi_0.4.1 mvtnorm_1.0-11 lubridate_1.7.4 xml2_1.2.2
## [16] bridgesampling_0.8-1 knitr_1.28 shinythemes_1.1.2 jsonlite_1.6.1 broom_0.5.4
## [21] dbplyr_1.4.2 shiny_1.4.0 compiler_3.6.2 httr_1.4.1 backports_1.1.5
## [26] assertthat_0.2.1 Matrix_1.2-18 fastmap_1.0.1 lazyeval_0.2.2 cli_2.0.1
## [31] later_1.0.0 htmltools_0.4.0 prettyunits_1.1.1 tools_3.6.2 igraph_1.2.4.2
## [36] coda_0.19-3 gtable_0.3.0 glue_1.3.1 reshape2_1.4.3 cellranger_1.1.0
## [41] vctrs_0.2.3 nlme_3.1-144 crosstalk_1.0.0 xfun_0.12 ps_1.3.2
## [46] rvest_0.3.5 mime_0.9 miniUI_0.1.1.1 lifecycle_0.1.0 gtools_3.8.1
## [51] zoo_1.8-7 scales_1.1.0 colourpicker_1.0 hms_0.5.3 promises_1.1.0
## [56] Brobdingnag_1.2-6 parallel_3.6.2 inline_0.3.15 shinystan_2.5.0 yaml_2.2.1
## [61] gridExtra_2.3 loo_2.2.0 StanHeaders_2.21.0-1 stringi_1.4.6 dygraphs_1.1.1.6
## [66] filelock_1.0.2 pkgbuild_1.0.6 storr_1.2.1 rlang_0.4.4 pkgconfig_2.0.3
## [71] matrixStats_0.55.0 evaluate_0.14 lattice_0.20-38 labeling_0.3 rstantools_2.0.0
## [76] htmlwidgets_1.5.1 processx_3.4.2 tidyselect_1.0.0 plyr_1.8.5 magrittr_1.5
## [81] R6_2.4.1 generics_0.0.2 base64url_1.4 txtq_0.2.0 DBI_1.1.0
## [86] pillar_1.4.3 haven_2.2.0 withr_2.1.2 xts_0.12-0 abind_1.4-5
## [91] modelr_0.1.5 crayon_1.3.4 rmarkdown_2.1 progress_1.2.2 readxl_1.3.1
## [96] callr_3.4.2 threejs_0.3.3 reprex_0.3.0 digest_0.6.23 xtable_1.8-4
## [101] httpuv_1.5.2 stats4_3.6.2 munsell_0.5.0 shinyjs_1.1